Results 1 - 3 of 3
1.
9th IEEE Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering, UPCON 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2213395

Abstract

The aim of this study is to automate the detection of COVID-19 patients by analysing the acoustic information embedded in cough samples. COVID-19 is a respiratory disease, and cough acoustics are a common symptom and indicator. The primary focus is the classification of deep features generated from analytical and mathematical representations of cough acoustics using two signal-processing techniques: Mel-frequency cepstral coefficients (MFCCs) and the Mel spectrogram. MFCCs provide a feature-vector representation of the cough signal and serve as input to a deep neural network (DNN) that generates deep features. A transfer-learning ResNet-50 convolutional neural network (CNN) model generates deep features from the image representation of the cough in the form of a Mel spectrogram. The dataset is labelled into two categories, COVID-19 and non-COVID-19; 70% of it is used for training and 30% for testing. The deep features generated from the MFCCs and Mel spectrograms are concatenated with a feature value output by a DNN that takes metadata as input, and the final concatenated feature vector is passed to a softmax classifier. This process yields a training AUC (area under the ROC curve) of 95.39%, a validation AUC of 88.19%, and a testing AUC of 88.76%. The AUC-versus-epoch curves show a steady increase in training AUC and convergence of the validation and testing AUCs to a common value, indicating a well-fitted model with no overfitting or underfitting. © 2022 IEEE.
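The MFCC front end described in this abstract (power spectrum, mel filterbank, log, discrete cosine transform) can be sketched in a few lines. This is a minimal numpy illustration of the standard MFCC computation on a single frame of a synthetic signal, not the authors' implementation; frame size, filter count, and cepstral count are assumed typical values.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters=26, n_fft=512, sr=16000):
    # Triangular filters with centres evenly spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, c):
            fb[i - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fb[i - 1, k] = (hi - k) / max(hi - c, 1)
    return fb

def mfcc(signal, sr=16000, n_fft=512, n_filters=26, n_ceps=13):
    # One frame: window -> power spectrum -> mel energies -> log -> DCT-II.
    frame = signal[:n_fft] * np.hamming(n_fft)
    power = np.abs(np.fft.rfft(frame, n_fft)) ** 2 / n_fft
    log_mel = np.log(mel_filterbank(n_filters, n_fft, sr) @ power + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2.0 * n_filters)))
    return dct @ log_mel

# Synthetic 440 Hz tone standing in for a cough sample.
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
coeffs = mfcc(sig)
print(coeffs.shape)  # (13,)
```

In practice a full recording is split into overlapping frames and the per-frame coefficient vectors are stacked into the feature matrix fed to the DNN.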

2.
9th IEEE Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering, UPCON 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2213393

Abstract

The COVID-19 pandemic poses global challenges that transcend boundaries of country, religion, race, and economy. Testing patients for COVID-19 remains a challenging task due to the lack of adequate medical supplies and well-trained personnel, and reverse transcription polymerase chain reaction (RT-PCR) testing is expensive, drawn out, and difficult to reconcile with social distancing. In this direction, we used microbiologically confirmed COVID-19 cough recordings from the Coswara dataset, an open-access, open-challenge dataset of sound recordings collected from COVID-19-infected and non-infected individuals across multiple countries through a crowd-sourcing website, for classification into positive and negative detections. Our work focuses on the cough-sound recordings. We developed acoustic biosignature feature extractors to screen for potential problems from cough recordings and to provide personalized advice on a patient's condition in real time. Cough recordings are converted into Mel-frequency cepstral coefficients (MFCCs) and passed through a Gaussian mixture model (GMM) for pattern recognition, with decisions made by a binary pre-screening diagnostic. Validated on infected and non-infected patients in a two-class classification on the Coswara dataset, the GMM-based biomarker detection model achieves an accuracy of 73.22%, and its performance is compared with existing classifiers. © 2022 IEEE.
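The GMM-based binary pre-screening described above amounts to fitting one mixture model per class on MFCC features and classifying by likelihood. The sketch below, under simplifying assumptions, uses a single-component diagonal Gaussian per class (the degenerate one-component GMM) on synthetic stand-in feature vectors; it is an illustration of the decision rule, not the authors' model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 13-dimensional MFCC-like features for the two classes
# (synthetic stand-ins; the real features come from cough recordings).
pos = rng.normal(loc=2.0, scale=1.0, size=(200, 13))   # "COVID-19" class
neg = rng.normal(loc=-2.0, scale=1.0, size=(200, 13))  # "non-COVID-19" class

def fit_gaussian(x):
    # Maximum-likelihood diagonal Gaussian: a one-component GMM per class.
    return x.mean(axis=0), x.var(axis=0) + 1e-6

def log_likelihood(x, mean, var):
    # Diagonal-covariance Gaussian log-density (up to exact normalisation).
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var, axis=-1)

params = {c: fit_gaussian(d) for c, d in [("pos", pos), ("neg", neg)]}

def classify(x):
    # Binary decision: pick the class whose model assigns higher likelihood.
    return max(params, key=lambda c: log_likelihood(x, *params[c]))

sample = rng.normal(loc=2.0, scale=1.0, size=13)
print(classify(sample))  # a sample drawn near the positive class -> "pos"
```

A full GMM would use several mixture components per class fitted with expectation-maximisation (e.g. `sklearn.mixture.GaussianMixture`), but the per-class likelihood comparison is the same.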

3.
9th IEEE Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering, UPCON 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2213390

Abstract

The aim of this study is to automate the detection of COVID-19 patients by analysing the acoustic information embedded in cough sounds. COVID-19 mainly affects the respiratory system, so respiratory-related speech sounds have the potential to hold important information for detecting the disease. The principal aim is to analyse the spectrogram representation of cough samples from the Coswara dataset and investigate whether COVID-19 alters the frequency content of the produced sound signals. Because the coronavirus affects the whole respiratory system, respiratory-related signals such as cough may contain information distinctive of COVID-19 patients. The Coswara cough recordings are labelled into COVID-19-positive and non-COVID classes; 70% of the dataset is used for training and 30% for testing. Mel-frequency cepstral coefficients (MFCCs) are used to extract characteristic features from the cough samples, and these features are used as input to a self-designed convolutional neural network (CNN) model for classification. This process yields a training AUC (area under the ROC curve) of 98.84%, a validation AUC of 88.23%, and a testing AUC of 87.09%. © 2022 IEEE.
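All three abstracts report performance as the area under the ROC curve. That metric has a simple probabilistic reading: the chance that a randomly chosen positive sample receives a higher score than a randomly chosen negative one (the Mann-Whitney U statistic, with ties counted half). A minimal numpy sketch on toy scores, not tied to any of the reported numbers:

```python
import numpy as np

def roc_auc(labels, scores):
    # AUC = P(score of a random positive > score of a random negative),
    # computed by pairwise comparison; ties contribute half credit.
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Perfectly separated scores give AUC = 1.0; random scores hover near 0.5.
print(roc_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # 1.0
```

The pairwise formulation makes it clear why an AUC near 0.88, as reported on the test sets above, means the classifier ranks a positive cough above a negative one about 88% of the time.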
